Regula Falsi

In mathematics, the ''regula falsi'', method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.

As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of (written in modern notation) the equation x + \tfrac{x}{4} = 15. This is solved by false position. First, guess that x = 4 to obtain, on the left, 4 + \tfrac{4}{4} = 5. This guess is a good choice since it produces an integer value. However, 4 is not the solution of the original equation, as it gives a value which is three times too small. To compensate, multiply x (currently set to 4) by 3 and substitute again to get 12 + \tfrac{12}{4} = 15, verifying that the solution is x = 12.

Modern versions of the technique employ systematic ways of choosing new test values and are concerned with the questions of whether or not an approximation to a solution can be obtained, and if it can, how fast the approximation can be found.


Two historical types

Two basic types of false position method can be distinguished historically, ''simple false position'' and ''double false position''.

''Simple false position'' is aimed at solving problems involving direct proportion. Such problems can be written algebraically in the form: determine x such that

:ax = b,

if a and b are known. The method begins by using a test input value x', and finding the corresponding output value b' by multiplication: ax' = b'. The correct answer is then found by proportional adjustment, x = \tfrac{b}{b'} x'.

''Double false position'' is aimed at solving more difficult problems that can be written algebraically in the form: determine x such that

:f(x) = ax + c = 0,

if it is known that

:f(x_1) = b_1, \qquad f(x_2) = b_2.

Double false position is mathematically equivalent to linear interpolation. By using a pair of test inputs and the corresponding pair of outputs, the result of this algorithm, given by

: x = \frac{b_1 x_2 - b_2 x_1}{b_1 - b_2},

would be memorized and carried out by rote. Indeed, the rule as given by Robert Recorde in his ''Ground of Artes'' (c. 1542) is:
:::Gesse at this woorke as happe doth leade.
:::By chaunce to truthe you may procede.
:::And firste woorke by the question,
:::Although no truthe therein be don.
:::Suche falsehode is so good a grounde,
:::That truth by it will soone be founde.
:::From many bate to many mo,
:::From to fewe take to fewe also.
:::With to much ioyne to fewe againe,
:::To to fewe adde to manye plaine.
:::In crossewaies multiplye contrary kinde,
:::All truthe by falsehode for to fynde.
For an affine linear function,

:f(x) = ax + c,

double false position provides the exact solution, while for a nonlinear function f it provides an approximation that can be successively improved by iteration.
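Both rules can be sketched in a few lines. This is a minimal illustration, not a standard library routine; the function names and test values below are mine.

```python
def simple_false_position(lhs, b, x_test):
    """Solve lhs(x) = b when lhs is directly proportional (lhs(x) = a*x):
    scale the test input by the ratio of desired to observed output."""
    return x_test * b / lhs(x_test)

def double_false_position(f, x1, x2):
    """Solve f(x) = 0 exactly when f is affine (f(x) = a*x + c),
    using two test inputs and their residuals b1, b2."""
    b1, b2 = f(x1), f(x2)
    return (b1 * x2 - b2 * x1) / (b1 - b2)

# Rhind papyrus problem 26: x + x/4 = 15, test value x = 4
print(simple_false_position(lambda x: x + x / 4, 15, 4))   # 12.0

# Affine example: f(x) = 3x - 12, test inputs 0 and 10
print(double_false_position(lambda x: 3 * x - 12, 0, 10))  # 4.0
```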


History

The simple false position technique is found in cuneiform tablets from ancient Babylonian mathematics, and in papyri from ancient Egyptian mathematics.

Double false position arose in late antiquity as a purely arithmetical algorithm. In the ancient Chinese mathematical text called ''The Nine Chapters on the Mathematical Art'' (九章算術), dated from 200 BC to AD 100, most of Chapter 7 was devoted to the algorithm. There, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call secant lines on a conic section. A more typical example is this "joint purchase" problem involving an "excess and deficit" condition:
Now an item is purchased jointly; everyone contributes 8 [coins], the excess is 3; everyone contributes 7, the deficit is 4. Tell: The number of people, the item price, what is each? Answer: 7 people, item price 53.
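The excess-and-deficit computation behind this answer can be replayed in a few lines. This is a sketch of the arithmetic, with variable names of my choosing:

```python
# Joint purchase: contributing 8 each leaves an excess of 3;
# contributing 7 each leaves a deficit of 4.
c1, excess = 8, 3
c2, deficit = 7, 4

# The classical excess-deficit rule recovers both unknowns:
people = (excess + deficit) / (c1 - c2)   # (3 + 4) / (8 - 7) = 7.0
price = c1 * people - excess              # 8 * 7 - 3 = 53.0
print(people, price)                      # 7.0 53.0
```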
Between the 9th and 10th centuries, the Egyptian mathematician Abu Kamil wrote a now-lost treatise on the use of double false position, known as the ''Book of the Two Errors'' (''Kitāb al-khaṭāʾayn''). The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa (10th century), an Arab mathematician from Baalbek, Lebanon. He justified the technique by a formal, Euclidean-style geometric proof. Within the tradition of medieval Muslim mathematics, double false position was known as ''hisāb al-khaṭāʾayn'' ("reckoning by two errors"). It was used for centuries to solve practical problems such as commercial and juridical questions (estate partitions according to rules of Quranic inheritance), as well as purely recreational problems. The algorithm was often memorized with the aid of mnemonics, such as a verse attributed to Ibn al-Yasamin and balance-scale diagrams explained by al-Hassar and Ibn al-Banna, all three being mathematicians of Moroccan origin (available online at: http://facstaff.uindy.edu/~oaks/Biblio/COMHISMA8paper.doc).

Leonardo of Pisa (Fibonacci) devoted Chapter 13 of his book ''Liber Abaci'' (AD 1202) to explaining and demonstrating the uses of double false position, terming the method ''regulis elchatayn'' after the ''al-khaṭāʾayn'' method that he had learned from Arab sources. In 1494, Pacioli used the term ''el cataym'' in his book ''Summa de arithmetica'', probably taking the term from Fibonacci. Other European writers would follow Pacioli and sometimes provided a translation into Latin or the vernacular. For instance, Tartaglia translates the Latinized version of Pacioli's term into the vernacular "false positions" in 1556. Pacioli's term nearly disappeared in 16th-century European works, and the technique went by various names such as "Rule of False", "Rule of Position" and "Rule of False Position". ''Regula falsi'' appears as the Latinized version of "Rule of False" as early as 1690. Several 16th-century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth; Humphrey Baker, for instance, offered such an apology in 1568.


Numerical analysis

The method of false position provides an exact solution for linear functions, but more direct algebraic techniques have supplanted its use for these functions. However, in numerical analysis, double false position became a root-finding algorithm used in iterative numerical approximation techniques.

Many equations, including most of the more complicated ones, can be solved only by iterative numerical approximation. This consists of trial and error, in which various values of the unknown quantity are tried. That trial-and-error may be guided by calculating, at each step of the procedure, a new estimate for the solution. There are many ways to arrive at a calculated estimate, and ''regula falsi'' provides one of these.

Given an equation, move all of its terms to one side so that it has the form f(x) = 0, where f is some function of the unknown variable x. A value x = r that satisfies this equation, that is, f(r) = 0, is called a ''root'' or ''zero'' of the function f and is a solution of the original equation. If f is a continuous function and there exist two points a_0 and b_0 such that f(a_0) and f(b_0) are of opposite signs, then, by the intermediate value theorem, the function f has a root in the interval (a_0, b_0).

There are many root-finding algorithms that can be used to obtain approximations to such a root. One of the most common is Newton's method, but it can fail to find a root under certain circumstances and it may be computationally costly since it requires a computation of the function's derivative. Other methods are needed and one general class of methods are the ''two-point bracketing methods''. These methods proceed by producing a sequence of shrinking intervals [a_k, b_k], at the kth step, such that (a_k, b_k) contains a root of f.


Two-point bracketing methods

These methods start with two x-values, initially found by trial-and-error, at which f(x) has opposite signs. Under the continuity assumption, a root of f is guaranteed to lie between these two values, that is to say, these values "bracket" the root. A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root. If c is the point selected, then the smaller interval goes from c to the endpoint where f(x) has the sign opposite that of f(c). In the improbable case that f(c) = 0, a root has been found and the algorithm stops. Otherwise, the procedure is repeated as often as necessary to obtain an approximation to the root to any desired accuracy.

The point selected in any current interval can be thought of as an estimate of the solution. The different variations of this method involve different ways of calculating this solution estimate. Preserving the bracketing and ensuring that the solution estimates lie in the interior of the bracketing intervals guarantees that the solution estimates will converge toward the solution, a guarantee not available with other root-finding methods such as Newton's method or the secant method.

The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval. That is, if at step k the current bracketing interval is [a_k, b_k], then the new solution estimate c_k is obtained by

:c_k = \frac{a_k + b_k}{2}.

This ensures that c_k is between a_k and b_k, thereby guaranteeing convergence toward the solution. Since the bracketing interval's length is halved at each step, the bisection method's error is, on average, halved with each iteration. Hence, every 3 iterations the method gains approximately a factor of 2^3 = 8, i.e. roughly a decimal place, in accuracy.
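The bisection procedure just described can be sketched as follows. This is a minimal illustration under the stated assumptions (continuous f, sign change on the initial interval); the function name and tolerance are mine.

```python
def bisect(f, a, b, tol=1e-12):
    """Bisection: repeatedly halve the bracketing interval,
    keeping the half on which f changes sign."""
    fa = f(a)
    while b - a > tol:
        c = (a + b) / 2
        fc = f(c)
        if fc == 0:                    # improbable exact hit
            return c
        if (fa > 0) == (fc > 0):       # f(a) and f(c) have the same sign
            a, fa = c, fc              # root lies in [c, b]
        else:
            b = c                      # root lies in [a, c]
    return (a + b) / 2

print(bisect(lambda x: x * x - 2, 0, 2))   # ≈ 1.4142135623…
```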


The ''regula falsi'' (false position) method

The convergence rate of the bisection method could possibly be improved by using a different solution estimate. The ''regula falsi'' method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the classical double false position formula on that line segment.

More precisely, suppose that in the k-th iteration the bracketing interval is (a_k, b_k). Construct the line through the points (a_k, f(a_k)) and (b_k, f(b_k)). This line is a secant or chord of the graph of the function f. In point-slope form, its equation is given by

: y - f(b_k) = \frac{f(b_k)-f(a_k)}{b_k-a_k} (x-b_k).

Now choose c_k to be the x-intercept of this line, that is, the value of x for which y = 0, and substitute these values to obtain

: f(b_k) + \frac{f(b_k)-f(a_k)}{b_k-a_k} (c_k-b_k) = 0.

Solving this equation for ''c''''k'' gives:

: c_k = b_k - f(b_k) \frac{b_k-a_k}{f(b_k)-f(a_k)} = \frac{a_k f(b_k) - b_k f(a_k)}{f(b_k)-f(a_k)}.

This last symmetrical form has a computational advantage: as a solution is approached, a_k and b_k will be very close together, and nearly always of the same sign. Such a subtraction can lose significant digits. Because f(b_k) and f(a_k) are always of opposite sign, the "subtraction" in the numerator of the improved formula is effectively an addition (as is the subtraction in the denominator too).

At iteration number k, the number c_k is calculated as above, and then, if f(a_k) and f(c_k) have the same sign, set a_{k+1} = c_k and b_{k+1} = b_k, otherwise set a_{k+1} = a_k and b_{k+1} = c_k. This process is repeated until the root is approximated sufficiently well.

The above formula is also used in the secant method, but the secant method always retains the last two computed points, and so, while it is slightly faster, it does not preserve bracketing and may not converge. The fact that ''regula falsi'' always converges, and has versions that do well at avoiding slowdowns, makes it a good choice when speed is needed. However, its rate of convergence can drop below that of the bisection method.
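The iteration just described, using the symmetrical form of the formula, can be sketched as follows. This is a minimal illustration; the function name, tolerance, and iteration cap are mine.

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=100):
    """False position: the new estimate is the x-intercept of the secant
    through (a, f(a)) and (b, f(b)); the sub-interval with a sign change
    is kept, so a root stays bracketed."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "f(a) and f(b) must have opposite signs"
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # symmetrical form
        fc = f(c)
        if abs(fc) < tol:
            return c
        if (fc > 0) == (fa > 0):            # same sign as f(a): root in [c, b]
            a, fa = c, fc
        else:                               # root in [a, c]
            b, fb = c, fc
    return c

print(regula_falsi(lambda x: x * x - 2, 0, 2))   # ≈ 1.4142135623…
```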


Analysis

Since the initial end-points a_0 and b_0 are chosen such that f(a_0) and f(b_0) are of opposite signs, at each step one of the end-points will get closer to a root of f. If the second derivative of f is of constant sign (so there is no inflection point) in the interval, then one endpoint (the one where f has the same sign as the second derivative) will remain fixed for all subsequent iterations while the converging endpoint becomes updated. As a result, unlike the bisection method, the width of the bracket does not tend to zero (unless the zero is at an inflection point around which \sgn(f) = -\sgn(f'')). As a consequence, the linear approximation to f(x), which is used to pick the false position, does not improve as rapidly as possible.

One example of this phenomenon is the function

: f(x) = 2x^3-4x^2+3x

on the initial bracket [−1, 1]. The left end, −1, is never replaced (it does not change at first, and after the first three iterations f'' is negative on the interval) and thus the width of the bracket never falls below 1. Hence, the right endpoint approaches 0 at a linear rate (the number of accurate digits grows linearly, with a rate of convergence of 2/3).

For discontinuous functions, this method can only be expected to find a point where the function changes sign (for example at x = 0 for 1/x or the sign function). In addition to sign changes, it is also possible for the method to converge to a point where the limit of the function is zero, even if the function is undefined (or has another value) at that point. It is mathematically possible with discontinuous functions for the method to fail to converge to a zero limit or sign change, but this is not a problem in practice since it would require an infinite sequence of coincidences for both endpoints to get stuck converging to discontinuities where the sign does not change, for example at x = \pm 1 in

:f(x) = \frac{1}{(x-1)^2} + \frac{1}{(x+1)^2}.

The method of bisection avoids this hypothetical convergence problem.
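The stagnating-endpoint behaviour on f(x) = 2x³ − 4x²+ 3x can be observed directly. This is a sketch; the helper name `regula_falsi_steps` is mine.

```python
def regula_falsi_steps(f, a, b, n):
    """Run n false-position steps and record the bracket width after each."""
    fa, fb = f(a), f(b)
    widths = []
    for _ in range(n):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if (fc > 0) == (fa > 0):
            a, fa = c, fc
        else:
            b, fb = c, fc
        widths.append(b - a)
    return widths

f = lambda x: 2 * x**3 - 4 * x**2 + 3 * x   # root at x = 0
# On [-1, 1] the left endpoint -1 is never replaced,
# so the bracket width never falls below 1:
print(regula_falsi_steps(f, -1.0, 1.0, 10))
```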


Improvements in ''regula falsi''

Though ''regula falsi'' always converges, usually considerably faster than bisection, there are situations that can slow its convergence, sometimes to a prohibitive degree. That problem isn't unique to ''regula falsi'': other than bisection, ''all'' of the numerical equation-solving methods can have a slow-convergence or no-convergence problem under some conditions. Sometimes, Newton's method and the secant method ''diverge'' instead of converging, and often do so under the same conditions that slow ''regula falsi's'' convergence. Even so, ''regula falsi'' is one of the best methods, and even in its original un-improved version is often the best choice; for example, when Newton's method isn't used because the derivative is prohibitively time-consuming to evaluate, or when Newton's method and successive substitutions have failed to converge.

''Regula falsi's'' failure mode is easy to detect: the same end-point is retained twice in a row. The problem is easily remedied by picking instead a modified false position, chosen to avoid slowdowns due to those relatively unusual unfavorable situations. A number of such improvements to ''regula falsi'' have been proposed; two of them, the Illinois algorithm and the Anderson–Björck algorithm, are described below.


The Illinois algorithm

The Illinois algorithm halves the y-value of the retained end point in the next estimate computation when the new y-value (that is, f(c_k)) has the same sign as the previous one (f(c_{k-1})), meaning that the end point of the previous step will be retained. Hence:

: c_k = \frac{\tfrac{1}{2} f(b_k) a_k - f(a_k) b_k}{\tfrac{1}{2} f(b_k) - f(a_k)}

or

: c_k = \frac{f(b_k) a_k - \tfrac{1}{2} f(a_k) b_k}{f(b_k) - \tfrac{1}{2} f(a_k)},

down-weighting one of the endpoint values to force the next c_k to occur on that side of the function. The factor ½ used above looks arbitrary, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442). There are other ways to pick the rescaling which give even better superlinear convergence rates.

The above adjustment to ''regula falsi'' is called the Illinois algorithm by some scholars. Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.
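The Illinois down-weighting can be sketched as follows. This is a minimal illustration of the technique described above; the function name and the `side` bookkeeping variable are mine.

```python
def illinois(f, a, b, tol=1e-12, max_iter=100):
    """Illinois variant of regula falsi: when the same endpoint survives
    twice in a row, halve its stored function value before interpolating."""
    fa, fb = f(a), f(b)
    side = 0                       # -1: a replaced last step, +1: b replaced
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if (fc > 0) == (fa > 0):   # replace a; b is retained
            a, fa = c, fc
            if side == -1:         # b retained twice in a row
                fb /= 2
            side = -1
        else:                      # replace b; a is retained
            b, fb = c, fc
            if side == +1:         # a retained twice in a row
                fa /= 2
            side = +1
    return c

# The slow case from the Analysis section now converges quickly:
print(illinois(lambda x: 2 * x**3 - 4 * x**2 + 3 * x, -1.0, 1.0))  # ≈ 0.0
```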


Anderson–Björck algorithm

Suppose that in the k-th iteration the bracketing interval is [a_k, b_k] and that the functional value of the new calculated estimate c_k has the same sign as f(b_k). In this case, the new bracketing interval is [a_{k+1}, b_{k+1}] = [a_k, c_k] and the left-hand endpoint has been retained. (So far, that's the same as ordinary ''regula falsi'' and the Illinois algorithm.)

But, whereas the Illinois algorithm would multiply f(a_k) by \tfrac{1}{2}, the Anderson–Björck algorithm multiplies it by m, where m has one of the two following values:

: \begin{align} m' &= 1 - \frac{f(c_k)}{f(b_k)},\\ m &= \begin{cases} m' & \text{if } m' > 0, \\ \frac{1}{2} & \text{otherwise.} \end{cases} \end{align}

For simple roots, Anderson–Björck performs very well in practice.
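The Anderson–Björck rescaling can be sketched by replacing the Illinois factor ½ with m, applied whenever an endpoint is retained. This is my reading of the rule above, not a reference implementation; the function name is mine.

```python
def anderson_bjorck(f, a, b, tol=1e-12, max_iter=100):
    """Anderson-Bjorck variant: scale the retained endpoint's value by
    m = 1 - f(c)/f(kept endpoint), falling back to 1/2 when m <= 0."""
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if (fc > 0) == (fb > 0):
            # f(c) has the sign of f(b): keep [a, c], rescale f(a)
            m = 1 - fc / fb
            fa *= m if m > 0 else 0.5
            b, fb = c, fc
        else:
            # f(c) has the sign of f(a): keep [c, b], rescale f(b)
            m = 1 - fc / fa
            fb *= m if m > 0 else 0.5
            a, fa = c, fc
    return c

print(anderson_bjorck(lambda x: x * x - 2, 0.0, 2.0))   # ≈ 1.4142135623…
```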


ITP method

Given \kappa_1 \in (0,\infty), \kappa_2 \in \left[1, 1+\phi\right), n_{1/2} \equiv \lceil \log_2((b_0-a_0)/(2\epsilon)) \rceil and n_0 \in [0,\infty), where \phi is the golden ratio \tfrac{1}{2}(1+\sqrt{5}), in each iteration j = 0, 1, 2, \ldots the ITP method calculates the point x_{\text{ITP}} following three steps:
# ''[Interpolation Step] Calculate the bisection and the regula falsi points: x_{1/2} \equiv \frac{a+b}{2} and x_f \equiv \frac{b f(a) - a f(b)}{f(a)-f(b)};''
# ''[Truncation Step] Perturb the estimator towards the center: x_t \equiv x_f + \sigma\delta, where \sigma \equiv \text{sign}(x_{1/2} - x_f) and \delta \equiv \min\{\kappa_1 |b-a|^{\kappa_2}, |x_{1/2} - x_f|\};''
# ''[Projection Step] Project the estimator to the minmax interval: x_{\text{ITP}} \equiv x_{1/2} - \sigma\rho_k, where \rho_k \equiv \min\left\{\epsilon 2^{n_{1/2}+n_0-j} - \frac{b-a}{2}, |x_t - x_{1/2}|\right\}.''
The value of the function f(x_{\text{ITP}}) at this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end. This three-step procedure guarantees that the estimate enjoys the minmax properties of the bisection method as well as the superlinear convergence of the secant method, and it is observed to outperform both bisection and interpolation-based methods for smooth and non-smooth functions.
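The three steps can be sketched as follows. This is a sketch under the assumption f(a_0) < 0 < f(b_0); the parameter defaults and function name are mine, not prescribed by the method.

```python
import math

def itp(f, a, b, eps=1e-10, kappa1=0.1, kappa2=2.0, n0=1):
    """ITP sketch. Assumes f(a) < 0 < f(b) on the initial bracket."""
    fa, fb = f(a), f(b)
    n_half = math.ceil(math.log2((b - a) / (2 * eps)))
    n_max = n_half + n0
    j = 0
    while b - a > 2 * eps:
        x_half = (a + b) / 2
        r = eps * 2 ** (n_max - j) - (b - a) / 2
        # Interpolation: the regula falsi point
        x_f = (b * fa - a * fb) / (fa - fb)
        # Truncation: perturb the estimator towards the midpoint
        sigma = math.copysign(1.0, x_half - x_f)
        delta = kappa1 * (b - a) ** kappa2
        x_t = x_f + sigma * delta if delta <= abs(x_half - x_f) else x_half
        # Projection: clamp into the minmax interval around the midpoint
        x_itp = x_t if abs(x_t - x_half) <= r else x_half - sigma * r
        y = f(x_itp)
        if y > 0:
            b, fb = x_itp, y
        elif y < 0:
            a, fa = x_itp, y
        else:
            return x_itp
        j += 1
    return (a + b) / 2

print(itp(lambda x: x * x - 4, 1.0, 3.0))   # ≈ 2.0
```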


Practical considerations

When solving one equation, or just a few, using a computer, the bisection method is an adequate choice. Although bisection isn't as fast as the other methods (when they're at their best and don't have a problem), it is nevertheless guaranteed to converge at a useful rate, roughly halving the error with each iteration and gaining roughly a decimal place of accuracy with every 3 iterations.

For manual calculation, by calculator, one tends to want to use faster methods, and they usually, but not always, converge faster than bisection. But a computer, even using bisection, will solve an equation, to the desired accuracy, so rapidly that there's no need to try to save time by using a less reliable method, and every method is less reliable than bisection.

An exception would be if the computer program had to solve equations very many times during its run. Then the time saved by the faster methods could be significant. In that case, a program could start with Newton's method and, if Newton's isn't converging, switch to ''regula falsi'', maybe in one of its improved versions, such as the Illinois or Anderson–Björck versions. Or, if even that isn't converging as well as bisection would, switch to bisection, which always converges at a useful, if not spectacular, rate.

When the change in the estimate has become very small, and the function value is also changing very little, then Newton's method most likely will not run into trouble, and will converge. So, under those favorable conditions, one could switch to Newton's method if one wanted the error to be very small and wanted very fast convergence.


Example: growth of bulrush

In Chapter 7 of ''The Nine Chapters'', a root finding problem can be translated to modern language as follows:
Excess And Deficit Problem #11: * A
bulrush Bulrush is a vernacular name for several large wetland grass-like plants *Sedge family (Cyperaceae): **''Cyperus'' **''Scirpus'' **'' Blysmus'' **''Bolboschoenus'' **''Scirpoides'' **''Isolepis'' **''Schoenoplectus'' **''Trichophorum'' *Typhacea ...
grew 3 units on its first day. At the end of each day, the plant is observed to have grown by 1/2 of the previous day's growth. * A club-rush grew 1 unit on its first day. At the end of each day, the plant has grown by 2 times as much as the previous day's growth. * Find the time ''n'' (in fractional days) at which the club-rush becomes as tall as the bulrush. Answer: (2 + \tfrac{6}{13}) days; the height is (4 + \tfrac{11}{13}) units. Explanation: * Suppose it is day 2. The club-rush is shorter than the bulrush by 1.5 units. * Suppose it is day 3. The club-rush is taller than the bulrush by 1.75 units. ∎
To understand this, we shall model the heights of the plants on day ''n'' (''n'' = 1, 2, 3...) after a
geometric series In mathematics, a geometric series is the sum of an infinite number of terms that have a constant ratio between successive terms. For example, the series :\frac \,+\, \frac \,+\, \frac \,+\, \frac \,+\, \cdots is geometric, because each succ ...
:B(n) = \sum_{i=1}^n 3 \cdot \frac{1}{2^{i-1}} \quad Bulrush
:C(n) = \sum_{i=1}^n 1 \cdot 2^{i-1} \quad Club-rush For the sake of better notation, let k = i-1 . Rewrite the plant height series B(n),\ C(n) in terms of k and invoke the sum formula. :B(n) = \sum_{k=0}^{n-1} 3 \cdot \frac{1}{2^k} = 3 \left( \frac{1-(1/2)^n}{1-1/2} \right) = 6 \left( 1 - \frac{1}{2^n} \right)
:C(n) = \sum_{k=0}^{n-1} 2^k = \frac{2^n - 1}{2-1} = 2^n - 1 Now, use ''regula falsi'' to find the root of (C(n) - B(n)) :F(n) := C(n) - B(n) = \frac{6}{2^n} + 2^n - 7 Set x_1 = 2 and compute F(x_1) = F(2) , which equals -1.5 (the "deficit").
Set x_2 = 3 and compute F(x_2) = F(3) , which equals 1.75 (the "excess"). Estimated root (1st iteration): :\hat{x} = \frac{x_1 F(x_2) - x_2 F(x_1)}{F(x_2) - F(x_1)} = \frac{2(1.75) - 3(-1.5)}{1.75 - (-1.5)} \approx 2.4615


Example code

This example program, written in the
C programming language ''The C Programming Language'' (sometimes termed ''K&R'', after its authors' initials) is a computer programming book written by Brian Kernighan and Dennis Ritchie, the latter of whom originally designed and implemented the language, as well as ...
, is an example of the Illinois algorithm. To find the positive number x where cos(x) = x^3, the equation is transformed into the root-finding form f(x) = cos(x) - x^3 = 0.

#include <stdio.h>
#include <math.h>

double f(double x)
{
    return cos(x) - x * x * x;
}

/* a,b: endpoints of an interval where we search
   e: half of upper bound for relative error
   m: maximal number of iterations
*/
double FalsiMethod(double (*f)(double), double a, double b, double e, int m)
{
    double c, fc;
    int n, side = 0;
    /* starting values at endpoints of interval */
    double fa = f(a);
    double fb = f(b);

    for (n = 0; n < m; n++) {
        c = (fa * b - fb * a) / (fa - fb);
        if (fabs(b - a) < e * fabs(b + a))
            break;
        fc = f(c);

        if (fc * fb > 0) {
            /* fc and fb have the same sign, copy c to b */
            b = c;
            fb = fc;
            if (side == -1)
                fa /= 2;
            side = -1;
        } else if (fa * fc > 0) {
            /* fc and fa have the same sign, copy c to a */
            a = c;
            fa = fc;
            if (side == +1)
                fb /= 2;
            side = +1;
        } else {
            /* fc * fa is very small (looks like zero) */
            break;
        }
    }
    return c;
}

int main(void)
{
    printf("%0.15f\n", FalsiMethod(&f, 0, 1, 5E-15, 100));
    return 0;
}

After running this code, the final answer is approximately 0.865474033101614.


See also

* ITP method, a variation with guaranteed minmax and superlinear convergence *
Ridders' method In numerical analysis, Ridders' method is a root-finding algorithm based on the false position method and the use of an exponential function to successively approximate a root of a continuous function f(x). The method is due to C. Ridders. Ridders' ...
, another root-finding method based on the false position method *
Brent's method In numerical analysis, Brent's method is a hybrid root-finding algorithm combining the bisection method, the secant method and inverse quadratic interpolation. It has the reliability of bisection but it can be as quick as some of the less-reliable ...


References


Further reading

* (On a previously unpublished treatise on Double False Position in a medieval Arabic manuscript.)